Nonmonotone and Perturbed Optimization

Author

  • Mikhail V. Solodov
Abstract

The primary purpose of this research is the analysis of nonmonotone optimization algorithms to which standard convergence analysis techniques do not apply. We consider methods that are inherently nonmonotone, as well as nonmonotonicity induced by data perturbations or inexact subproblem solution. One of the principal applications of our results is the analysis of gradient-type methods that process the data incrementally. The computational significance of these algorithms is well documented in the neural networks literature. Such algorithms are known to be particularly well suited for large data sets, as well as for real-time applications. One of the most important methods of this type is the classical online backpropagation (BP) algorithm for training artificial neural networks. Neural networks constitute a large interdisciplinary area of research within the broader area of machine learning that has found applications in many branches of science and technology. However, much of the work in the area has been based on heuristic concepts and trial-and-error experimentation. This research fills some of the existing theoretical gaps. In particular, we obtain the first deterministic convergence results for the BP algorithm and its various practically important modifications. We also investigate error-stability properties of the generalized gradient projection method. When specialized to neural network training, our general results allow us to establish stability of BP in the presence of noise, and give its precise characterization. We also outline applications to weight perturbation training. In a classical optimization setting, some new results are derived for a perturbed generalized gradient projection method applied to convex and weakly sharp problems. Next we develop a general approach to convergence analysis of feasible descent methods in the presence of perturbations. The important novel feature of our analysis is that perturbations need not tend to zero in the limit.
In this case, standard convergence analysis techniques are not applicable, and we present a new approach. It is shown that a certain ε-approximate solution can be obtained, where ε depends linearly on the level of perturbations. Applications to the gradient projection, proximal minimization and extragradient algorithms are described. We also consider a practical generalization of the parallel variable distribution algorithm of Ferris and Mangasarian. In particular, our generalization is twofold: we propose an asynchronous algorithm, and we allow inexact subproblem solution. We show that the generalized method retains all the attractive properties of the original method and yet is more practical. We also derive some stronger convergence results …
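The incremental gradient methods discussed above update the iterate using one data sample at a time, instead of the gradient of the full sum. A minimal sketch in Python, assuming a least-squares objective f(x) = ½ Σᵢ (aᵢᵀx − bᵢ)²; the cyclic sampling order, step-size rule, and problem data below are illustrative choices, not those of the thesis:

```python
import numpy as np

def incremental_gradient(grad_i, x0, n_samples, step, n_epochs):
    """Deterministic incremental gradient method: each inner step uses
    the gradient of a single sample, cycling through the data in a
    fixed order; the learning rate diminishes across epochs."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_epochs):
        a = step(k)  # diminishing per-epoch learning rate
        for i in range(n_samples):
            x = x - a * grad_i(x, i)  # step along one sample's gradient
    return x

# Hypothetical least-squares instance: f(x) = 0.5 * sum_i (a_i @ x - b_i)^2
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                        # consistent system, so x_true is optimal
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]
x = incremental_gradient(grad_i, np.zeros(3), 50,
                         lambda k: 0.5 / (k + 10), 200)
```

Because the system is consistent, every per-sample gradient vanishes at x_true, so the iterates approach it even though the full objective need not decrease monotonically along the way.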


Similar resources

A Trust-region Method using Extended Nonmonotone Technique for Unconstrained Optimization

In this paper, we present a nonmonotone trust-region algorithm for unconstrained optimization. We first introduce a variant of the nonmonotone strategy proposed by Ahookhosh and Amini [AhA01] and incorporate it into the trust-region framework to construct a more efficient approach. Our new nonmonotone strategy combines the current function value with the maximum function values in some pri...
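The reference value in such strategies replaces f(x_k) in the acceptance test. A small sketch in the spirit of the convex-combination rule attributed above to Ahookhosh and Amini; the weight `eta` and memory length are illustrative, not the paper's values:

```python
def nonmonotone_reference(f_hist, eta=0.7, memory=5):
    """Convex combination of the maximum over recent function values
    and the current one; the trust-region ratio (or a line-search test)
    is then measured against this value instead of f_k alone."""
    f_max = max(f_hist[-memory:])       # max of the last `memory` values
    return eta * f_max + (1.0 - eta) * f_hist[-1]

# 0.7 * max(5, 3, 4, 2) + 0.3 * 2 = 0.7*5 + 0.3*2 = 4.1
r = nonmonotone_reference([5.0, 3.0, 4.0, 2.0])
```

Setting `eta = 0` recovers the monotone test against f_k, while `eta = 1` recovers a pure max-based (GLL-style) test, so the weight interpolates between the two.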


Nonmonotone Hybrid Tabu Search for Inequalities and Equalities: an Experimental Study

The main goal of this paper is to analyze the behavior of nonmonotone hybrid tabu search approaches when solving systems of nonlinear inequalities and equalities through the global optimization of an appropriate merit function. The algorithm combines global and local searches and uses a nonmonotone reduction of the merit function to choose the local search. Relaxing the condition aims to call t...


Solving the Unconstrained Optimization Problems Using the Combination of Nonmonotone Trust Region Algorithm and Filter Technique

In this paper, we propose a new nonmonotone adaptive trust region method for solving unconstrained optimization problems that is equipped with the filter technique. In the proposed method, a variant of the nonmonotone technique is used. Using this technique, the algorithm can take advantage of nonmonotone properties, which can increase the rate at which problems are solved. Also, the filter that is used in...


Backpropagation Convergence via Deterministic Nonmonotone Perturbed Minimization

The fundamental backpropagation (BP) algorithm for training artificial neural networks is cast as a deterministic nonmonotone perturbed gradient method. Under certain natural assumptions, such as the series of learning rates diverging while the series of their squares converging, it is established that every accumulation point of the online BP iterates is a stationary point of the BP error func...
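The step-size conditions mentioned above (learning rates α_k with Σ α_k = ∞ while Σ α_k² < ∞) can be illustrated on a toy perturbed gradient method. This sketch uses random bounded gradient errors purely for illustration, whereas the analysis described in the abstract is deterministic:

```python
import numpy as np

# alpha_k = 1/(k+1) satisfies the classical conditions: the harmonic
# series diverges, while the series of squares converges (to pi^2/6).
# Perturbed gradient method on f(x) = 0.5*x^2 (so grad f(x) = x),
# with a bounded gradient error added at every step.
rng = np.random.default_rng(1)
x = 5.0
for k in range(100_000):
    alpha = 1.0 / (k + 1)
    err = rng.uniform(-1.0, 1.0)   # bounded, nonvanishing perturbation
    x -= alpha * (x + err)         # perturbed gradient step
# x ends near the stationary point x* = 0 despite persistent errors
```

The diminishing steps average the perturbations out: with this particular α_k the final iterate reduces to (minus) the running mean of the errors, which shrinks even though each individual error does not.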


A Nonmonotone Line Search Technique and Its Application to Unconstrained Optimization

A new nonmonotone line search algorithm is proposed and analyzed. In our scheme, we require that an average of the successive function values decreases, while the traditional nonmonotone approach of Grippo, Lampariello, and Lucidi [SIAM J. Numer. Anal., 23 (1986), pp. 707–716] requires that a maximum of recent function values decreases. We prove global convergence for nonconvex, smooth function...
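A minimal sketch of this averaging idea, in the style of the scheme the abstract describes, for gradient descent with a backtracking test against a running weighted average C_k of function values rather than f(x_k); the test function and parameters below are illustrative:

```python
import numpy as np

def nonmonotone_descent(f, grad, x0, eta=0.85, c1=1e-4, n_iters=100):
    """Gradient descent with a nonmonotone Armijo test: a trial step is
    accepted if it sufficiently decreases a running weighted average C
    of past function values, not necessarily f(x_k) itself."""
    x = np.asarray(x0, dtype=float)
    C, Q = f(x), 1.0                    # C_0 = f(x_0), Q_0 = 1
    for _ in range(n_iters):
        g = grad(x)
        t = 1.0
        # backtrack until the averaged sufficient-decrease test holds
        while f(x - t * g) > C - c1 * t * (g @ g):
            t *= 0.5
        x = x - t * g
        Qn = eta * Q + 1.0              # update the weighted average
        C = (eta * Q * C + f(x)) / Qn
        Q = Qn
    return x

# Illustrative convex quadratic: f(x) = 0.5*x0^2 + x1^2
f = lambda x: 0.5 * x[0] ** 2 + x[1] ** 2
grad = lambda x: np.array([x[0], 2.0 * x[1]])
x = nonmonotone_descent(f, grad, np.array([3.0, 2.0]))
```

With `eta = 0` this reduces to the ordinary monotone Armijo rule; larger `eta` lets the method accept temporary increases in f, in contrast with the max-based rule of Grippo, Lampariello, and Lucidi mentioned above.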



Publication date: 1995